We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and follow a distribution quite different from that of the conditional modalities, such as textual descriptors in natural language, it is hard to learn a probabilistic mapping from the desired conditional modality to human motion sequences. Besides, the raw motion data from motion capture systems can be redundant across a sequence and contain noise; directly modeling the joint distribution over raw motion sequences and conditional modalities would incur heavy computational overhead and could introduce artifacts from the captured noise. To learn a better representation of diverse human motion sequences, we first design a powerful Variational AutoEncoder (VAE) and arrive at a representative, low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish connections between raw motion sequences and conditional inputs, we perform the diffusion process in the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) produces vivid motion sequences conforming to the given conditional inputs while substantially reducing computational overhead in both the training and inference stages. Extensive experiments on various human motion generation tasks demonstrate that MLD achieves significant improvements over state-of-the-art methods, running two orders of magnitude faster than previous diffusion models that operate on raw motion sequences.
translated by 谷歌翻译
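The two-stage pipeline the abstract describes (a VAE that compresses a motion sequence into a low-dimensional latent, plus a diffusion model that denoises in that latent space rather than in raw motion space) can be sketched as follows. All shapes, the linear encoder/decoder, and the toy "denoiser" are illustrative assumptions, not MLD's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 60-frame sequence of 72-dim poses is compressed
# into an 8-dim latent, so diffusion runs in 8 dims instead of 4320.
SEQ_LEN, JOINT_DIM, LATENT_DIM = 60, 72, 8

W_enc = rng.normal(scale=0.01, size=(SEQ_LEN * JOINT_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.01, size=(LATENT_DIM, SEQ_LEN * JOINT_DIM))

def encode(motion):
    """VAE-style encoder: raw motion sequence -> low-dimensional latent."""
    return motion.reshape(-1) @ W_enc

def decode(z):
    """Decoder: latent code -> full motion sequence."""
    return (z @ W_dec).reshape(SEQ_LEN, JOINT_DIM)

def denoise_step(z_t, t, cond):
    """One reverse-diffusion step in latent space; the linear map is a
    stand-in for a learned conditional denoising network."""
    predicted_noise = 0.1 * z_t + 0.05 * cond
    return z_t - predicted_noise / (t + 1)

def generate(cond, steps=10):
    """Sample a latent by iterative denoising, then decode to motion."""
    z = rng.normal(size=LATENT_DIM)        # start from Gaussian noise
    for t in reversed(range(steps)):
        z = denoise_step(z, t, cond)
    return decode(z)

motion = generate(cond=np.ones(LATENT_DIM))
z = encode(motion)            # round-trip back into the latent space
print(motion.shape, z.shape)  # (60, 72) (8,)
```

The point of the sketch is the cost argument: every denoising step touches an 8-dim vector instead of a 4320-dim raw sequence.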
Recent work on human pose estimation has mainly focused on designing more effective deep network structures as human feature extractors, and most of these feature extraction networks use only the position of each anatomical keypoint to guide their training process. However, we found that certain human anatomical keypoints preserve their topological relations, which can help localize them more accurately when detecting keypoints on the feature map. To the best of our knowledge, no prior work has specifically studied this. Thus, in this paper, we present a novel 2D human pose estimation method with explicit anatomical keypoint structure constraints, which introduces into the loss objective a topology constraint term consisting of the differences in keypoint-to-keypoint distance and direction between predictions and their ground truth. More importantly, our proposed model can be plugged into most existing bottom-up or top-down human pose estimation methods to improve their performance. Extensive experiments on the benchmark COCO keypoint dataset show that our method performs favorably against most existing bottom-up and top-down human pose estimation methods; in particular, when our model is plugged into Lite-HRNet, its AP score rises by 2.9\% and 3.3\% on the COCO val2017 and test-dev2017 datasets, respectively.
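The topology constraint term described above can be illustrated with a minimal numpy sketch. The 2D coordinates, the all-pairs formulation, and the equal weighting of the distance and direction terms are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def topology_constraint_loss(pred, gt):
    """Toy topology constraint: penalise differences in keypoint-to-keypoint
    distance and direction between predicted and ground-truth keypoints.
    `pred` and `gt` are (K, 2) arrays of 2D keypoint coordinates."""
    dp = pred[:, None, :] - pred[None, :, :]   # pairwise offsets, (K, K, 2)
    dg = gt[:, None, :] - gt[None, :, :]
    mask = ~np.eye(pred.shape[0], dtype=bool)  # skip self-pairs
    # Distance term: how much each pairwise distance deviates from GT.
    dist_term = np.abs(np.linalg.norm(dp, axis=-1)
                       - np.linalg.norm(dg, axis=-1))[mask].mean()
    # Direction term: 1 - cosine similarity of the pairwise offset vectors.
    eps = 1e-8
    cos = (dp * dg).sum(-1) / (np.linalg.norm(dp, axis=-1)
                               * np.linalg.norm(dg, axis=-1) + eps)
    dir_term = (1.0 - cos)[mask].mean()
    return dist_term + dir_term

gt = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
assert topology_constraint_loss(gt, gt) < 1e-6  # perfect pose: near-zero loss
```

Note that because only pairwise offsets enter the loss, it is invariant to a global translation of the predicted pose but penalises scale and rotation errors, which is what makes it a topology constraint rather than a position constraint.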
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. It is therefore crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that achieve real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset collected with the ZED stereo camera, which is capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; a detailed description of the solutions is provided in this paper.
Twitter bot detection is an important and meaningful task. Existing text-based methods can deeply analyze user tweet content and achieve high performance. However, novel Twitter bots evade these detectors by stealing genuine users' tweets and diluting malicious content with benign tweets. These novel bots are characterized by semantic inconsistency. In addition, methods exploiting the Twitter graph structure have recently emerged and shown strong competitiveness. However, hardly any method has made the text and graph modalities deeply fused and interactive, so as to leverage their advantages and learn the relative importance of the two modalities. In this paper, we propose a novel model named BIC that makes the text and graph modalities deeply interactive and detects tweet semantic inconsistency. Specifically, BIC contains a text propagation module and a graph propagation module, which conduct bot detection on the text and the graph structure respectively, as well as a provably effective interaction module to make the two interact. BIC also contains a semantic consistency detection module to learn semantic consistency information from tweets. Extensive experiments demonstrate that our framework outperforms competitive baselines on a comprehensive Twitter bot benchmark. We further prove the effectiveness of the proposed interaction and semantic consistency detection modules.
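The interaction module is described only at a high level; a generic cross-attention step, in which each modality attends over the other and mixes in what it reads, gives a flavor of such deep text-graph interaction. This is an illustrative stand-in, not BIC's actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interact(text_feats, graph_feats):
    """One cross-modal interaction step: text features attend over graph
    features and vice versa, then each side adds what it attended to
    (a plain cross-attention sketch with shared feature dimension)."""
    attn_tg = softmax(text_feats @ graph_feats.T)   # text attends to graph
    attn_gt = softmax(graph_feats @ text_feats.T)   # graph attends to text
    return (text_feats + attn_tg @ graph_feats,
            graph_feats + attn_gt @ text_feats)

rng = np.random.default_rng(0)
T = rng.normal(size=(6, 8))    # 6 tweet-token features (toy sizes)
G = rng.normal(size=(4, 8))    # 4 graph-node features
T2, G2 = interact(T, G)
print(T2.shape, G2.shape)  # (6, 8) (4, 8)
```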
Knowledge graph embedding (KGE) aims to map entities and relations into a low-dimensional space and has become the \textit{de-facto} standard for knowledge graph completion. Most existing KGE methods suffer from the sparsity challenge, where it is hard to predict entities that appear only infrequently in the knowledge graph. In this work, we propose a novel framework, KRACL, to alleviate the widespread sparsity in KGs with graph context and contrastive learning. First, we propose the knowledge relational network (KRAT) to exploit graph context by simultaneously projecting neighboring triples into different latent spaces and jointly aggregating messages with an attention mechanism. KRAT is able to capture the subtle semantic information and importance of different context triples, and to leverage multi-hop information in knowledge graphs. Second, we propose the knowledge contrastive loss by combining a contrastive loss with a cross-entropy loss, which introduces more negative samples and thus enriches the feedback to sparse entities. Our experiments demonstrate that KRACL achieves superior results on various standard knowledge graph benchmarks, especially on WN18RR and NELL-995, which contain large numbers of low-degree entities. Extensive experiments also confirm KRACL's effectiveness in handling sparse knowledge graphs and its robustness against noisy triples.
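The knowledge contrastive loss, a contrastive term combined with cross-entropy, can be sketched as follows. The InfoNCE form, cosine scoring, temperature, and mixing weight `lam` are all assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.1):
    """Contrastive term: pull the query embedding toward the true entity
    and push it away from many sampled negatives (InfoNCE form)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def cross_entropy(logits, target):
    """Standard cross-entropy over candidate-entity scores."""
    z = logits - logits.max()
    return -(z[target] - np.log(np.exp(z).sum()))

def knowledge_contrastive_loss(query, positive, negatives,
                               entity_logits, target, lam=0.5):
    """Weighted sum of the two terms; `lam` is an assumed mixing weight."""
    return lam * info_nce(query, positive, negatives) + \
           (1 - lam) * cross_entropy(entity_logits, target)

q = np.array([1.0, 0.0])
tail = np.array([1.0, 0.0])
negs = [np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
loss = knowledge_contrastive_loss(q, tail, negs,
                                  entity_logits=np.array([4.0, 0.0, 0.0]),
                                  target=0)
assert loss > 0.0  # both terms are non-negative log-losses
```

The extra negatives in the contrastive term are what enrich the training signal for sparse entities: every sampled negative contributes a gradient even when the true entity has few observed triples.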
Twitter bot detection has become an increasingly important task in combating misinformation, facilitating social media moderation, and preserving the integrity of online discourse. State-of-the-art bot detection methods generally leverage the graph structure of the Twitter network, and they show promising performance when facing novel Twitter bots that traditional methods fail to detect. However, very few existing Twitter bot detection datasets are graph-based, and even these graph-based datasets suffer from limited dataset scale, incomplete graph structure, and low annotation quality. In fact, the lack of a large-scale graph-based Twitter bot detection benchmark that addresses these issues has seriously hindered the development and evaluation of graph-based bot detection approaches. In this paper, we present TwiBot-22, a comprehensive graph-based Twitter bot detection benchmark that offers the largest dataset to date, provides diversified entities and relations on the Twitter network, and has considerably better annotation quality than existing datasets. In addition, we re-implement 35 representative Twitter bot detection baselines and evaluate them on 9 datasets, including TwiBot-22, to promote fair comparison of model performance and a holistic understanding of research progress. To facilitate further research, we consolidate all implemented code and datasets into the TwiBot-22 evaluation framework, where researchers can consistently evaluate new models and datasets. The TwiBot-22 Twitter bot detection benchmark and evaluation framework are publicly available at https://twibot22.github.io/.
Deep convolutional neural networks have achieved impressive success in low-light image enhancement over the past few years. Deep learning methods mostly improve the capability of feature extraction by stacking network modules and deepening the network, which at the same time incurs more runtime cost. To reduce inference time while still fully extracting both local and global features, and inspired by SGN, we propose a broadly self-guided network (ABSGN) for real-world low-light image enhancement. The broad strategy handles noise at different exposure levels. The proposed network is validated on many mainstream benchmarks. Additional experimental results show that the proposed network outperforms state-of-the-art low-light image enhancement solutions.
Modeling the ideological perspectives of political actors is an important task in computational political science, with applications in many downstream tasks. Existing approaches are generally limited to textual data and voting records, while they neglect the rich social context and valuable expert knowledge needed for holistic ideological analysis. In this paper, we propose a representation learning framework for political actors that jointly leverages social context and expert knowledge. Specifically, we retrieve and extract factual statements about legislators to leverage social context information. We then construct a heterogeneous information network to incorporate the social context and use relational graph neural networks to learn legislator representations. Finally, we train our model with three objectives: aligning representation learning with expert knowledge, modeling ideological stance consistency, and simulating the echo chamber phenomenon. Extensive experiments demonstrate that our learned representations successfully advance the state of the art on three downstream tasks. Further analysis proves the correlation between the learned legislator representations and various socio-political factors, as well as the necessity of modeling both social context and expert knowledge for political actors.
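A single relational graph neural network layer of the kind this framework relies on can be sketched in a few lines: neighbours are aggregated separately per relation type, each with its own weight matrix. The toy sizes, mean aggregation, and ReLU are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D, R = 5, 4, 2          # nodes (legislators), feature dim, relation types
X = rng.normal(size=(N, D))                 # initial node features
A = rng.integers(0, 2, size=(R, N, N))      # one adjacency matrix per relation
W = rng.normal(scale=0.5, size=(R, D, D))   # relation-specific weights
W_self = np.eye(D)                          # self-loop transform

def rgcn_layer(X, A, W):
    """One relational-GCN layer: messages are aggregated separately per
    relation type with relation-specific weights, plus a self-loop term."""
    out = X @ W_self
    for r in range(R):
        deg = A[r].sum(axis=1, keepdims=True).clip(min=1)
        out += (A[r] / deg) @ X @ W[r]   # mean over relation-r neighbours
    return np.maximum(out, 0.0)          # ReLU

H = rgcn_layer(X, A, W)
print(H.shape)  # (5, 4)
```

Keeping a separate weight matrix per relation is what lets a heterogeneous information network distinguish, say, co-sponsorship edges from committee-membership edges during message passing.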
Identifying the political perspective of news media has become an important task amid the rapid growth of political commentary and increasingly polarized political ideologies. Previous approaches focus on textual content and leave out the rich social and political context, which is essential in the argument mining process. To address this limitation, we propose a political perspective detection method that incorporates external domain knowledge. Specifically, we construct a political knowledge graph to serve as domain-specific external knowledge. We then leverage heterogeneous information networks to represent news documents, jointly modeling news text and external knowledge. Finally, we adopt relational graph neural networks and conduct political perspective detection as graph-level classification. Extensive experiments demonstrate that our method consistently achieves the best performance on two real-world perspective detection benchmarks. Ablation studies further confirm the necessity of external knowledge and the effectiveness of our graph-based approach.
The iterative linear quadratic regulator (iLQR) has gained wide popularity in solving trajectory optimization problems for nonlinear system models. However, as a model-based shooting method, it relies heavily on an accurate system model to update the optimal control actions and the trajectory determined by forward integration, and thus becomes vulnerable to inevitable model inaccuracies. Recently, substantial research efforts on learning-based methods for optimal control problems have achieved remarkable progress in dealing with unknown system models, particularly when the system has complex interactions with the environment. However, a deep neural network is typically required to fit a large amount of sampled data. In this work, we present Neural-iLQR, a learning-aided shooting method over the unconstrained control space, in which a neural network with a simple structure is used to represent the local system model. In this framework, the trajectory optimization task is achieved by iteratively refining the optimal policy and the neural network, without relying on prior knowledge of the system model. Through comprehensive evaluations on two illustrative control tasks, the proposed method is shown to outperform the conventional iLQR in the presence of inaccuracies in the system model.
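The core idea, fitting a local model from sampled transitions and optimizing the control against that model instead of the true (unknown) dynamics, can be sketched on a toy 1-D point mass. Here ordinary least squares stands in for the simple-structure neural network, and a single LQR backward pass stands in for the full iLQR iteration; both are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown true dynamics (discretised 1-D point mass); the optimiser never
# sees these equations, only sampled transitions.
def true_step(x, u):
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    return A @ x + B @ u

# 1. Fit a local model from sampled data (least squares stands in for the
#    simple neural network used by Neural-iLQR).
X, U, Xn = [], [], []
for _ in range(200):
    x, u = rng.normal(size=2), rng.normal(size=1)
    X.append(x); U.append(u); Xn.append(true_step(x, u))
Z = np.hstack([np.array(X), np.array(U)])            # (200, 3) regressors
theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T              # learned linearisation

# 2. LQR backward pass on the *learned* model (the inner loop of iLQR).
Q, Ru = np.eye(2), np.eye(1) * 0.1
P = Q.copy()
for _ in range(50):
    K = np.linalg.solve(Ru + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)

# 3. Roll out the resulting policy on the *true* system.
x = np.array([1.0, 0.0])
for _ in range(100):
    x = true_step(x, -K @ x)
print(np.linalg.norm(x))  # state driven near the origin
```

The sketch shows the division of labour the abstract describes: the data-driven model replaces the analytic dynamics inside the optimizer, while the forward rollout happens on the real system, so model refinement and policy refinement can alternate.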